
    Visual Distance Estimation in Static Compared to Moving Virtual Scenes

    Visual motion is used to control the direction and speed of self-motion and the time-to-contact with an obstacle. In earlier work, we found that human subjects can discriminate between the distances of different visually simulated self-motions in a virtual scene. Distance indication by means of an exocentric interval adjustment task, however, revealed a linear correlation between perceived and indicated distances, but with a profound distance underestimation. One possible explanation for this underestimation is the perception of visual space in virtual environments. Humans perceive visual space in natural scenes as curved, and distances are increasingly underestimated with increasing distance from the observer. Such spatial compression may also exist in our virtual environment. We therefore surveyed perceived visual space in a static virtual scene. We asked observers to compare two horizontal depth intervals, similar to experiments performed in natural space. Subjects had to indicate the size of one depth interval relative to a second interval. Our observers perceived visual space in the virtual environment as compressed, similar to the perception found in natural scenes. However, the nonlinear depth function we found cannot explain the observed distance underestimation of visually simulated self-motions in the same environment.
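
    The compressive mapping described above can be illustrated with a simple power-law depth function. The sketch below is a minimal illustration, not the study's analysis; the exponent value and the perceived_depth helper are assumptions chosen only to show how a nonlinear depth function makes far intervals look shorter than equally long near intervals.

```python
import numpy as np

# Illustrative compressive depth function: perceived distance grows as a
# power law of physical distance; an exponent < 1 yields compression.
# The exponent of 0.8 is an arbitrary assumption for illustration.
def perceived_depth(d, exponent=0.8):
    return d ** exponent

# Two physical depth intervals of equal 5 m length: a near one (5-10 m)
# and a far one (20-25 m) from the observer.
near = perceived_depth(10.0) - perceived_depth(5.0)
far = perceived_depth(25.0) - perceived_depth(20.0)

# Under a compressive mapping the far interval appears shorter than the
# near one, so observers judge it as only a fraction of the near interval.
print(f"perceived near interval: {near:.2f}")
print(f"perceived far interval:  {far:.2f}")
print(f"far/near ratio:          {far / near:.2f}")
```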

    Adaptation and mislocalization fields for saccadic outward adaptation in humans

    Adaptive shortening of a saccade influences the metrics of other saccades within a spatial window around the adapted target. Within this adaptation field, visual stimuli presented before an adapted saccade are mislocalized in proportion to the change of the saccade metric. We investigated the saccadic adaptation field and the associated localization changes for saccade lengthening, or outward adaptation. We measured the adaptation field for two different saccade adaptations (14 deg to 20 deg and 20 deg to 26 deg) by testing transfer to 34 different target positions. We measured localization judgements by asking subjects to localize a probe flashed before saccade onset. The amount of adaptation transfer differed between target locations: it increased with the horizontal component of the saccade and remained largely constant across deviations of the vertical component. Mislocalization of probes inside the adaptation field was correlated with the amount of adaptation of saccades to the probe location. These findings are consistent with the assumption that oculomotor space and perceptual space are linked to each other.
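
    A minimal sketch of how adaptation transfer and its relation to mislocalization might be quantified, assuming per-position arrays of pre- and post-adaptation saccade amplitudes and of perceived probe shifts. The variable names and the synthetic numbers are illustrative assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: saccade amplitudes (deg) toward 34 test positions before
# and after outward adaptation of a 14 -> 20 deg saccade, plus the perceived
# shift of a probe flashed at each position (deg). All values are made up.
pre_amp = np.full(34, 14.0)
transfer = rng.uniform(0.1, 0.9, size=34)        # fraction of full adaptation
post_amp = pre_amp + 6.0 * transfer              # amplitudes pulled toward 20 deg
mislocalization = 4.0 * transfer + rng.normal(0.0, 0.5, size=34)

# Adaptation transfer per position: amplitude change relative to the
# full 6 deg outward target step.
adaptation = (post_amp - pre_amp) / 6.0

# Relate the adaptation field to the mislocalization field.
r = np.corrcoef(adaptation, mislocalization)[0, 1]
print(f"correlation between adaptation transfer and mislocalization: r = {r:.2f}")
```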

    Saccadic Adaptation Is Associated with Starting Eye Position

    Saccadic adaptation is the motor learning process that keeps saccade amplitudes on target. This process is eye-position specific: amplitude adaptation induced for a saccade at one particular location in the visual field transfers incompletely to saccades at other locations. In our current study, we investigated whether this eye position signal corresponds to the initial or to the final eye position of the saccade. Each case would have different implications for the mechanisms of adaptation. The initial eye position is not directly available when the adaptation-driving post-saccadic error signal is received; the final eye position, on the other hand, is not available when the motor command for the saccade is calculated. In six human subjects we adapted a saccade of 15 degrees amplitude that started at a constant position. We then measured the transfer of adaptation to test saccades of 10 and 20 degrees amplitude. In each case we compared test saccades that matched the start position of the adapted saccade to those that matched the target of the adapted saccade. We found significantly more transfer of adaptation to test saccades with the same start position than to test saccades with the same target position. The results indicate that saccadic adaptation is specific to the initial eye position. This is consistent with a previously proposed effect of gain-field-modulated input from areas such as the frontal eye field, the lateral intraparietal area, and the superior colliculus into the cerebellar adaptation circuitry.
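
    The comparison reported here reduces to contrasting transfer for test saccades that share the adapted saccade's start position with transfer for those that share its target. The sketch below runs that paired comparison on synthetic per-subject values; the numbers are assumptions for illustration, not the study's results.

```python
import numpy as np

# Synthetic per-subject transfer values (fraction of the induced amplitude
# change that shows up in test saccades) for six subjects. Made-up numbers.
transfer_same_start = np.array([0.62, 0.55, 0.70, 0.48, 0.66, 0.59])
transfer_same_target = np.array([0.31, 0.28, 0.40, 0.22, 0.35, 0.30])

# Paired comparison: does transfer depend on sharing the start position
# rather than the target position of the adapted saccade?
diff = transfer_same_start - transfer_same_target
print(f"mean transfer (same start):  {transfer_same_start.mean():.2f}")
print(f"mean transfer (same target): {transfer_same_target.mean():.2f}")
print(f"mean paired difference:      {diff.mean():.2f} +/- {diff.std(ddof=1):.2f}")
```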

    Peri-saccadic compression to two locations in a two-target choice saccade task

    When visual stimuli are presented at the onset of a saccadic eye movement, they are seen compressed onto the target location of the saccade. This peri-saccadic compression is believed to result from internal feedback pathways between oculomotor and visual areas of the brain. This feedback enhances vision around the saccade target at the expense of localization ability in other regions of the visual field. Although saccades can be targeted at only one object at a time, a visual scene often contains multiple potential targets, and the oculomotor system has to choose which one to look at. If two targets are available, preparatory activity builds up at both target locations in oculomotor maps. Here we show that, in this situation, two foci of compression develop, independent of which of the two targets is eventually chosen for the saccade. Our results suggest that theories using oculomotor feedback as an efference copy signal for upcoming eye movements should take into account the possibility that multiple feedback signals from potential targets occur in parallel before the execution of a saccade.
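
    One way to picture two foci of compression is as a pull of each flashed probe toward each of two potential saccade targets, weighted by proximity. The sketch below is a toy model, not the paper's quantitative account; the Gaussian pull profile, its width, and the compression strength are assumptions.

```python
import numpy as np

def compress(x, targets, strength=0.6, sigma=4.0):
    """Toy model: shift probe positions toward each target, weighted by a
    Gaussian of the probe-target distance, mimicking two compression foci."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    shift = np.zeros_like(x)
    for t in targets:
        w = np.exp(-0.5 * ((x - t) / sigma) ** 2)  # pull is strongest near each target
        shift += strength * w * (t - x)
    return x + shift

# Probes along the horizontal meridian; two potential targets at 10 and 20 deg.
probes = np.linspace(0.0, 30.0, 7)
perceived = compress(probes, targets=[10.0, 20.0])
for p, q in zip(probes, perceived):
    print(f"probe at {p:5.1f} deg -> perceived at {q:5.1f} deg")
```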

    Translation and articulation in biological motion perception

    Recent models of biological motion processing focus on the articulational aspect of human walking, investigated with point-light figures walking in place. In real human walking, however, the change in the position of the limbs relative to each other (articulation) produces a change of body location in space over time (translation). To examine the role of this translational component in the perception of biological motion, we designed three psychophysical experiments on facing discrimination (leftward/rightward) and articulation discrimination (forward/backward and leftward/rightward) of a point-light walker viewed from the side, varying translation direction (relative to articulation direction), the amount of local image motion, and trial duration. In a further set of forward/backward and leftward/rightward articulation tasks, we additionally tested the influence of translational speed, including catch trials without articulation. We found a perceptual bias in the direction of translation in all three discrimination tasks. In the case of facing discrimination the bias was limited to short stimulus presentations. Our results suggest an interaction of articulation analysis with the processing of translational motion, leading to best articulation discrimination when translational direction and speed match the articulation. Moreover, we conclude that the global motion of the center of mass of the dot pattern is more relevant to the processing of translation than the local motion of the dots. Our findings highlight that translation is a relevant cue that should be integrated into models of human motion detection.
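
    The distinction between articulation and translation can be made concrete by generating a point-light walker whose dot positions are the sum of an articulation term (limb motion relative to the body) and a translation term (displacement of the whole body over time). The sketch below is a heavily simplified generator with sinusoidal limb motion; it is not the stimulus code used in the study.

```python
import numpy as np

def point_light_frames(n_frames=60, stride_hz=1.0, translation_speed=1.2, fps=30):
    """Return (n_frames, n_dots, 2) coordinates of a crude point-light walker:
    dots articulate sinusoidally around fixed body positions while the whole
    configuration translates horizontally."""
    t = np.arange(n_frames) / fps
    # Fixed body layout (x, y): head, shoulders, hips, feet.
    body = np.array([[0.0, 1.8], [-0.2, 1.5], [0.2, 1.5],
                     [-0.1, 1.0], [0.1, 1.0], [-0.1, 0.0], [0.1, 0.0]])
    frames = np.zeros((n_frames, len(body), 2))
    for i, ti in enumerate(t):
        articulation = np.zeros_like(body)
        # Feet (last two dots) swing in counter-phase: the articulation component.
        articulation[5, 0] = 0.3 * np.sin(2 * np.pi * stride_hz * ti)
        articulation[6, 0] = 0.3 * np.sin(2 * np.pi * stride_hz * ti + np.pi)
        # Whole-body displacement: the translation component.
        translation = np.array([translation_speed * ti, 0.0])
        frames[i] = body + articulation + translation
    return frames

frames = point_light_frames()
print("left foot x at frame 0:", round(frames[0, 5, 0], 2),
      "and at frame 30:", round(frames[30, 5, 0], 2))
```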

    Saccadic Adaptation to Moving Targets

    Saccades are so-called ballistic movements, executed without online visual feedback. After each saccade the saccadic motor plan is modified in response to post-saccadic feedback through the mechanism of saccadic adaptation. The post-saccadic feedback is provided by the retinal position of the target after the saccade. If the target moves after the saccade, gaze may follow the moving target; in that case the eyes are controlled by the pursuit system, which governs smooth eye movements. Although these two systems have in the past been considered largely independent, recent lines of research point toward many interactions between them. We were interested in whether saccade amplitude adaptation is induced when the target moves smoothly after the saccade. Prior studies of saccadic adaptation have used intra-saccadic target steps as learning signals. In the present study, the intra-saccadic target step of the McLaughlin paradigm of saccadic adaptation was replaced by target movement and post-saccadic pursuit of the target. We found that saccadic adaptation occurred in this situation, a further indication of an interaction between the saccadic and pursuit systems with the aim of optimized eye movements.
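
    The paradigm can be summarized as a variant of the McLaughlin double-step procedure in which the intra-saccadic target step is replaced by smooth post-saccadic target motion. The sketch below simulates how a post-saccadic error signal of this kind could drive a gradual gain change; the learning rate, motion parameters, and update rule are assumptions, not the study's model.

```python
target_ecc = 15.0        # initial target eccentricity (deg)
motion_speed = 10.0      # post-saccadic target speed (deg/s), moving outward
motion_duration = 0.3    # seconds of post-saccadic target motion (pursuit phase)
learning_rate = 0.05
gain = 1.0               # saccadic gain: amplitude / target eccentricity

for trial in range(100):
    landing = gain * target_ecc                 # saccade landing position
    final_target = target_ecc + motion_speed * motion_duration
    error = final_target - landing              # post-saccadic error after pursuit
    gain += learning_rate * error / target_ecc  # gradual gain adaptation

print(f"saccadic gain after 100 trials: {gain:.2f}")
print(f"adapted amplitude for a 15 deg target: {gain * target_ecc:.1f} deg")
```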

    Fixation related shifts of perceptual localization counter to saccade direction

    Perisaccadic compression of the perceived location of flashed visual stimuli toward a saccade target occurs from about 50 ms before a saccade. Here we show that between 150 and 80 ms before a saccade, perceived locations are shifted toward the fixation point. To establish the cause of this "reverse" presaccadic perceptual distortion, participants completed several versions of a saccade task. After a cue to saccade, a probe bar stimulus was briefly presented within the saccade trajectory. In Experiment 1 participants made (a) overlap saccades with immediate return saccades, (b) overlap saccades, and (c) step saccades. In Experiment 2 participants made gap saccades in complete darkness. In Experiment 3 participants maintained fixation while the probe stimuli were masked at various interstimulus intervals. Participants indicated the bar's location using a mouse cursor. In all conditions of Experiment 1, presaccadic compression was preceded by compression toward the initial fixation. In Experiment 2, saccadic compression was maintained but the preceding countercompression was not observed. Stimuli masked during fixation were not compressed. This suggests that the two opposing compression effects are related to the act of executing an eye movement; they are caused neither by the requirement to make two sequential saccades ending at the initial fixation location nor by the continuous presence of the fixation markers. We propose that countercompression is related to fixation activity and is part of the sequence of motor preparations for executing a cued saccade.
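
    The time course described here is typically visualized by binning probe onsets relative to saccade onset and averaging the signed localization error in each bin. The sketch below does this on synthetic data in which shifts toward the target are positive and shifts toward fixation are negative; the bin width, time windows, and effect sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic localization data: probe onset time relative to saccade onset
# (ms, negative = before the saccade) and the signed localization error
# (deg; positive = toward the saccade target, negative = toward fixation).
probe_time = rng.uniform(-200.0, 0.0, size=400)
base_shift = np.where(probe_time > -50, 2.0,             # compression toward target
             np.where(probe_time > -150, -1.0, 0.0))     # countercompression / none
reported_shift = base_shift + rng.normal(0.0, 0.5, size=400)

# Average the reported shift in 25 ms bins to expose the time course.
bins = np.arange(-200, 1, 25)
idx = np.digitize(probe_time, bins) - 1
for b in range(len(bins) - 1):
    mean_shift = reported_shift[idx == b].mean()
    print(f"{bins[b]:4d} to {bins[b+1]:4d} ms: mean shift {mean_shift:+.2f} deg")
```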

    The influence of image content on oculomotor plasticity

    When we observe a scene, we shift our gaze to different points of interest via saccadic eye movements. Saccades provide high-resolution views of objects and are essential for vision. The successful view of an interesting target might constitute a rewarding experience for the oculomotor system. We measured the influence of image content on learning efficiency in saccade control by comparing meaningful pictures to luminance- and spatial-frequency-matched random noise images in a saccadic adaptation paradigm. In this paradigm, a shift of the target during the saccade results in a gradual increase of saccade amplitude. Stimuli were masked at different times after saccade onset. For immediate masking of the stimuli, as well as for their permanent visibility, saccadic adaptation was similar for both types of targets. However, when stimuli were masked 200 ms after saccade onset, adaptation of saccades directed toward the meaningful target stimuli was significantly greater than that of saccades directed toward noise targets. Thus, the percept of a meaningful image at the saccade landing position facilitates learning of the appropriate parameters for saccadic motor control when time constraints exist. We conclude that oculomotor learning, which is traditionally considered a low-level and highly automatized process, is modulated by the visual content of the image.

    Perception of biological motion from size-invariant body representations

    The visual recognition of action is one of the socially most important and computationally demanding capacities of the human visual system. It combines visual shape recognition with complex non-rigid motion perception. An action presented as a point-light animation is a striking visual experience for anyone who sees it for the first time. Information about the shape and posture of the human body is sparse in point-light animations, but it is essential for action recognition. In the posturo-temporal filter model of biological motion perception, posture information is picked up by visual neurons tuned to the form of the human body before body motion is calculated. We tested whether point-light stimuli are processed through posture recognition of the human body form by exploiting a typical feature of form recognition, namely size invariance. We constructed a point-light stimulus that can only be perceived through a size-invariant mechanism. This stimulus changes rapidly in size from one image to the next. It thus disrupts the continuity of early visuo-spatial properties but maintains the continuity of the body posture representation. Despite this massive manipulation at the visuo-spatial level, size-changing point-light figures are spontaneously recognized by naive observers and support discrimination of human body motion.
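
    The size-changing stimulus described here can be approximated by rescaling every frame of a point-light sequence by a random factor about its centroid, which disrupts frame-to-frame spatial continuity while leaving the body posture within each frame intact. The sketch below applies that manipulation to an arbitrary coordinate array; it is an assumption-laden illustration, not the original stimulus code.

```python
import numpy as np

rng = np.random.default_rng(2)

def rescale_frames(frames, scale_range=(0.5, 1.5)):
    """Rescale each frame of a point-light sequence by a random factor about
    its centroid: the posture within a frame is preserved, but dot positions
    jump in scale from frame to frame, disrupting local spatial continuity."""
    out = np.empty_like(frames)
    for i, frame in enumerate(frames):
        scale = rng.uniform(*scale_range)
        centroid = frame.mean(axis=0)
        out[i] = centroid + scale * (frame - centroid)
    return out

# Dummy sequence: 10 frames of 12 dots with 2D coordinates.
frames = rng.normal(size=(10, 12, 2))
scaled = rescale_frames(frames)
print("mean frame-to-frame dot displacement, original:",
      round(float(np.abs(np.diff(frames, axis=0)).mean()), 2))
print("mean frame-to-frame dot displacement, rescaled:",
      round(float(np.abs(np.diff(scaled, axis=0)).mean()), 2))
```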

    How deeply do we include robotic agents in the self?

    In human–human interactions, a consciously perceived high degree of self–other overlap is associated with a higher degree of integration of the other person's actions into one's own cognitive representations. Here, we report data suggesting that this pattern does not hold for human–robot interactions. Participants performed a social Simon task with a robot and afterwards indicated the degree of self–other overlap using the Inclusion of the Other in the Self (IOS) scale. We found no overall correlation between the social Simon effect (as an indirect measure of self–other overlap) and the IOS score (as a direct measure of self–other overlap); for female participants we even observed a negative correlation. Our findings suggest that conscious and unconscious evaluations of a robot may yield different results, and hence point to the importance of carefully choosing a measure for quantifying the quality of human–robot interactions.
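
    A minimal sketch of how the two measures being related here could be computed: the social Simon effect as the reaction-time difference between spatially incompatible and compatible trials, and the IOS rating as a single score per participant. The column layout and the synthetic numbers are assumptions for illustration, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(3)
n_participants = 24

# Synthetic per-participant data: mean reaction times (ms) for spatially
# compatible and incompatible trials of the social Simon task, and the
# Inclusion of the Other in the Self (IOS) rating (1-7). Values are made up.
rt_compatible = rng.normal(420.0, 30.0, n_participants)
rt_incompatible = rt_compatible + rng.normal(8.0, 10.0, n_participants)
ios_score = rng.integers(1, 8, n_participants).astype(float)

# Social Simon effect: slowing when the irrelevant stimulus location is
# incompatible with the side assigned to the co-actor.
simon_effect = rt_incompatible - rt_compatible

# Relation between the indirect (Simon effect) and direct (IOS) measures
# of self-other overlap.
r = np.corrcoef(simon_effect, ios_score)[0, 1]
print(f"mean social Simon effect: {simon_effect.mean():.1f} ms")
print(f"correlation with IOS score: r = {r:.2f}")
```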